8 research outputs found

    A design of license plate recognition system using convolutional neural network

    This paper proposes an improved Convolutional Neural Network (CNN) approach for a license plate recognition system. The main contribution is a methodology for determining the best model for the four-layered CNN architecture used as the recognition method. This is achieved by validating the best parameters of the enhanced Stochastic Diagonal Levenberg-Marquardt (SDLM) learning algorithm and the network size of the CNN. Several preprocessing algorithms, such as Sobel operator edge detection, morphological operations and connected component analysis, are used to localize the license plate and to isolate and segment the characters before feeding the input to the CNN. The proposed model is found to be superior when subjected to multi-scaling and variations of the input patterns. As a result, the license plate preprocessing stage achieved 74.7% accuracy and the CNN recognition stage achieved 94.6% accuracy.
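    The preprocessing chain described above can be illustrated with a short OpenCV sketch; the function name, kernel sizes and area threshold below are assumptions chosen for illustration and are not taken from the paper.

        import cv2
        import numpy as np

        def candidate_character_boxes(image_bgr):
            """Sobel edges + morphological closing + connected component analysis."""
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            # Sobel operator edge detection (vertical edges dominate on plate text)
            edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)
            _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
            # Morphological closing merges edge fragments into solid candidate regions
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
            closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
            # Connected component analysis yields bounding boxes of candidate characters
            n, _, stats, _ = cv2.connectedComponentsWithStats(closed)
            return [tuple(stats[i, :4]) for i in range(1, n)
                    if stats[i, cv2.CC_STAT_AREA] > 100]   # area filter is an assumption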

    A multi-biometric iris recognition system based on a deep learning approach

    Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. In this paper, an efficient, real-time multimodal biometric system is proposed, based on building deep learning representations for images of both the right and left irises of a person and fusing the results obtained using a ranking-level fusion method. The proposed deep learning system, called IrisConvNet, combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from the input image (the localized iris region) without any domain knowledge and then classify it into one of N classes. In this work, a discriminative CNN training scheme based on a combination of the back-propagation algorithm and the mini-batch AdaGrad optimization method is proposed for weight updating and learning-rate adaptation, respectively. In addition, other training strategies (e.g., the dropout method and data augmentation) are also employed in order to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval and IITD iris databases. The results obtained from the proposed system outperform other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern and PCA), achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
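    A minimal NumPy sketch of the per-parameter learning-rate adaptation that mini-batch AdaGrad applies to back-propagated gradients is given below; the function name and hyperparameter values are illustrative assumptions, not values reported in the paper.

        import numpy as np

        def adagrad_step(param, grad, cache, base_lr=0.01, eps=1e-8):
            """One AdaGrad update for a weight array and its mini-batch gradient."""
            cache += grad ** 2                                # accumulate squared gradients
            param -= base_lr * grad / (np.sqrt(cache) + eps)  # per-parameter scaled step
            return param, cache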

    Convolutional neural networks with fused layers applied to face recognition

    In this paper, we propose an effective convolutional neural network (CNN) model for the problem of face recognition. The proposed CNN architecture applies fused convolution/subsampling layers that result in a simpler model with fewer network parameters; that is, a smaller number of neurons, trainable parameters, and connections. In addition, it does not require any complex or costly image preprocessing steps that are typical in existing face recognition systems. In this work, we enhance the stochastic diagonal Levenberg-Marquardt algorithm, a second-order back-propagation algorithm, to obtain faster network convergence and better generalization ability. Experiments on the ORL database show that a recognition accuracy of 100% is achieved, with the network converging within 15 epochs. The average processing time of the proposed CNN face recognition solution, executed on a 2.5 GHz Intel i5 quad-core processor, is 3 s per epoch, with a recognition time of less than 0.003 s. These results show that the proposed CNN model is a computationally efficient architecture with faster processing and learning times that also produces higher recognition accuracy, outperforming other existing neural-network-based face recognizers.
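    One common way to realise a fused convolution/subsampling layer is a strided convolution, which filters and downsamples in a single operation and therefore needs no separate pooling stage; the PyTorch sketch below illustrates that idea with assumed layer sizes rather than the paper's exact architecture.

        import torch
        import torch.nn as nn

        class FusedConvBlock(nn.Module):
            """Convolution and 2x subsampling fused into one strided layer."""
            def __init__(self, in_channels, out_channels):
                super().__init__()
                # stride=2 performs the subsampling as part of the convolution itself
                self.conv = nn.Conv2d(in_channels, out_channels,
                                      kernel_size=5, stride=2, padding=2)
                self.activation = nn.Tanh()

            def forward(self, x):
                return self.activation(self.conv(x))

        # Example: a 1-channel 32x32 face crop becomes a half-resolution feature map
        features = FusedConvBlock(1, 6)(torch.randn(1, 1, 32, 32))  # -> (1, 6, 16, 16)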

    Automated Classification of Stroke Lesion Using Bagged Tree Classifier

    Stroke is a "brain attack" that often causes paralysis, resulting from either bleeding in the brain (hemorrhagic stroke) or a blockage of blood flow to the brain (ischemic stroke). It poses a major challenge to Malaysian healthcare services, with at least 32 deaths per day, while survivors are burdened with multiple problems. Conventionally, the diagnosis is performed manually by neuroradiologists in a highly subjective and time-consuming task. Therefore, this paper aims to diagnose and classify stroke by analysing diffusion-weighted imaging (DWI) brain stroke images using a Bagged Tree classifier. Stroke is classified into three main types: acute stroke, chronic stroke and hemorrhagic stroke. The performance of the proposed method is verified using accuracy and the Area Under the Curve (AUC). Based on the results, the overall classification accuracy is 96.7%, and the AUC for acute, chronic and hemorrhagic stroke is 97%, 100% and 99%, respectively. This outcome could help improve community healthcare by providing better solutions through such an intelligent system.
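    A minimal scikit-learn sketch of a bagged-tree classifier evaluated with accuracy and one-vs-rest AUC is shown below; the random placeholder features stand in for DWI-derived descriptors, and all parameter values are assumptions rather than the paper's settings.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.metrics import accuracy_score, roc_auc_score
        from sklearn.model_selection import train_test_split

        # Placeholder data: rows are image-derived feature vectors, labels are
        # 0 = acute, 1 = chronic, 2 = hemorrhagic (illustrative only)
        X = np.random.rand(300, 20)
        y = np.random.randint(0, 3, 300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = BaggingClassifier(n_estimators=100, random_state=0)  # default base learner is a decision tree
        clf.fit(X_tr, y_tr)

        print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
        print("one-vs-rest AUC:", roc_auc_score(y_te, clf.predict_proba(X_te), multi_class="ovr"))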

    Handbook of Vascular Biometrics / Improved CNN-segmentation-based finger vein recognition using automatically generated and fused training labels

    We utilise segmentation-oriented CNNs to extract vein patterns from near-infrared finger imagery and use them as the actual vein features in biometric finger vein recognition. As the process of manually generating the ground-truth labels required to train the networks is extremely time-consuming and error-prone, we propose several models to automatically generate training data, eliminating the need for manually annotated labels. Furthermore, we investigate label fusion between such automatically generated labels and manually generated labels. Based on our experiments, the proposed methods are also able to improve the recognition performance of CNN-based feature extraction to varying extents.
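    As a simple illustration of label fusion, the sketch below combines several automatically generated binary vein masks by pixel-wise majority voting; this is an assumed stand-in for illustration, not the specific fusion strategy evaluated in the chapter.

        import numpy as np

        def fuse_labels_majority(masks):
            """masks: list of binary (H, W) arrays from different label generators."""
            votes = np.stack(masks, axis=0).astype(np.float32).mean(axis=0)
            return (votes >= 0.5).astype(np.uint8)  # keep pixels most generators agree on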